The issue of explainability for autonomous systems is becoming increasingly prominent. Several researchers and organisations have advocated the provision of a "Why did you do that?" button that allows a user to interrogate a robot about its choices and actions. We take previous work on debugging cognitive agent programs and apply it to the question of supplying explanations to end users in the form of answers to why-questions. These approaches generate a trace of events from the program's execution and then answer why-questions using that trace. We implemented this framework in the Agent Infrastructure Layer (AIL) and, in particular, the Gwendolen programming language it supports, extending it in the process ...
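A minimal sketch of the trace-and-query idea described above, in Python. All names here (TraceEvent, explain_action, the event kinds) are illustrative assumptions, not the actual AIL/Gwendolen API, which is Java-based; the point is only to show how a why-question can be answered by walking causal links back from an action event in a recorded trace.

```python
# Hypothetical sketch: record BDI-style execution events with causal links,
# then answer "Why did you do X?" by walking those links back from the action.
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    step: int                # position in the execution trace
    kind: str                # e.g. "belief_added", "goal_adopted", "plan_selected", "action"
    detail: str              # symbolic content, e.g. "move_to(kitchen)"
    because_of: list = field(default_factory=list)  # indices of triggering events

def explain_action(trace, action_detail):
    """Answer 'Why did you do <action>?' from the recorded trace."""
    target = next((e for e in trace if e.kind == "action" and e.detail == action_detail), None)
    if target is None:
        return f"I never performed {action_detail}."
    reasons, frontier = [], list(target.because_of)
    while frontier:                       # walk back over the because_of links
        event = trace[frontier.pop()]
        reasons.append(f"{event.kind}: {event.detail}")
        frontier.extend(event.because_of)
    return f"I did {action_detail} because of " + "; ".join(reversed(reasons))

# Toy trace: a belief leads to a goal, a plan is selected, an action is executed.
trace = [
    TraceEvent(0, "belief_added", "human_asked_for(tea)"),
    TraceEvent(1, "goal_adopted", "serve(tea)", because_of=[0]),
    TraceEvent(2, "plan_selected", "fetch_then_serve(tea)", because_of=[1]),
    TraceEvent(3, "action", "move_to(kitchen)", because_of=[2]),
]
print(explain_action(trace, "move_to(kitchen)"))
```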
Humans are increasingly relying on complex systems that heavily adopt Artificial Intelligence (AI) ...
Transparency in human-robot interaction (HRI) describes the method of making the current state of a robot or intelligent agent...
Typically, humans interact with a humanoid robot with apprehension. This lack of trust can seriously...
This data set contains examples used to test an initial implementation of Explainability for Gwendol...
A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpec...
We describe a stance towards the generation of explanations in AI agents that is both human-centered...
The XAI concept was launched by DARPA in 2016 in the context of model lear...
Transparency of robot behaviors increases efficiency and quality of interactions with humans. To inc...
12th International Conference on Agents and Artificial Intelligence (ICAART 20...
Explainability is assumed to be a key factor for the adoption of Artificial Intelligence systems in ...
The ability to explain actions and decisions is often regarded as a basic ingr...
Stange S. Tell Me Why (and What)! Self-Explanations for Autonomous Social Robot Behavior. Bielefeld:...
Recent developments in explainable artificial intelligence promise the potential to transform human-...